SweetDeep: A Wearable AI Solution for Real-Time Non-Invasive Diabetes Screening
Henriques, Ian, Elhassar, Lynda, Relekar, Sarvesh, Walrave, Denis, Hassantabar, Shayan, Ghanakota, Vishu, Laoui, Adel, Aich, Mahmoud, Tir, Rafia, Zerguine, Mohamed, Louafi, Samir, Kimouche, Moncef, Cosson, Emmanuel, Jha, Niraj K.
The global rise in type 2 diabetes underscores the need for scalable and cost-effective screening methods. Current diagnosis requires biochemical assays, which are invasive and costly. Advances in consumer wearables have enabled early explorations of machine learning-based disease detection, but prior studies were limited to controlled settings. We present SweetDeep, a compact neural network trained on physiological and demographic data from 285 (diabetic and non-diabetic) participants in the EU and MENA regions, collected using Samsung Galaxy Watch 7 devices in free-living conditions over six days. Each participant contributed multiple 2-minute sensor recordings per day, totaling approximately 20 recordings per individual. Despite comprising fewer than 3,000 parameters, SweetDeep achieves 82.5% patient-level accuracy (82.1% macro-F1, 79.7% sensitivity, 84.6% specificity) under three-fold cross-validation, with an expected calibration error of 5.5%. Allowing the model to abstain on less than 10% of low-confidence patient predictions yields an accuracy of 84.5% on the remaining patients. These findings demonstrate that combining engineered features with lightweight architectures can support accurate, rapid, and generalizable detection of type 2 diabetes in real-world wearable settings.
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- Europe > San Marino > Fiorentino > Fiorentino (0.04)
- Europe > France > Île-de-France > Paris > Paris (0.04)
- Africa > Middle East > Algeria > Constantine Province > Constantine (0.04)
- Research Report > Experimental Study (0.68)
- Research Report > New Finding (0.66)
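The abstract's abstention mechanism (declining to classify low-confidence patients) can be sketched as a threshold on the model's softmax confidence. This is a minimal illustration, not SweetDeep itself: the two-class logits and the 0.7 threshold are assumptions for the example.

```python
import math

def softmax(logits):
    """Convert raw logits to class probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_abstention(logits, threshold=0.7):
    """Return (predicted class, confidence), or (None, confidence)
    when the top probability falls below the threshold (abstain)."""
    probs = softmax(logits)
    conf = max(probs)
    if conf < threshold:
        return None, conf
    return probs.index(conf), conf
```

Sweeping the threshold trades coverage for accuracy, which is how an "abstain on less than 10% of patients" operating point would be chosen on validation data.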
AI-VaxGuide: An Agentic RAG-Based LLM for Vaccination Decisions
Zeggai, Abdellah, Traikia, Ilyes, Lakehal, Abdelhak, Boulesnane, Abdennour
Vaccination plays a vital role in global public health, yet healthcare professionals often struggle to access immunization guidelines quickly and efficiently. National protocols and WHO recommendations are typically extensive and complex, making it difficult to extract precise information, especially during urgent situations. This project tackles that issue by developing a multilingual, intelligent question-answering system that transforms static vaccination guidelines into an interactive and user-friendly knowledge base. Built on a Retrieval-Augmented Generation (RAG) framework and enhanced with agent-based reasoning (Agentic RAG), the system provides accurate, context-sensitive answers to complex medical queries. Evaluation shows that Agentic RAG outperforms traditional methods, particularly in addressing multi-step or ambiguous questions. To support clinical use, the system is integrated into a mobile application designed for real-time, point-of-care access to essential vaccine information. The AI-VaxGuide model is publicly available at https://huggingface.co/VaxGuide
- Africa > Middle East > Algeria > Constantine Province > Constantine (0.04)
- North America > United States > Massachusetts (0.04)
- Asia > Japan (0.04)
- (5 more...)
- Health & Medicine > Therapeutic Area > Vaccines (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
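The retrieval step of a RAG pipeline can be sketched as ranking guideline snippets against a query and passing the top matches to the generator as context. The token-overlap scorer and the sample guideline strings below are illustrative stand-ins for the system's actual retriever and corpus, which the abstract does not detail.

```python
import re

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank snippets by word overlap with the query; return the
    top-k to be used as context for the answer generator."""
    q = tokens(query)
    scored = sorted(documents,
                    key=lambda d: len(q & tokens(d)),
                    reverse=True)
    return scored[:k]

# Toy guideline snippets (illustrative, not from any real protocol):
guidelines = [
    "Measles vaccine: two doses, first at 9 months.",
    "Tetanus booster recommended every 10 years.",
    "Hepatitis B vaccine given at birth.",
]
context = retrieve("When is the measles vaccine given?", guidelines)
```

A production system would replace the overlap scorer with dense embeddings, and the agentic layer would decide when to re-query or decompose a multi-step question.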
Boosting KNNClassifier Performance with Opposition-Based Data Transformation
In this paper, we introduce a novel data transformation framework based on Opposition-Based Learning (OBL) to boost the performance of traditional classification algorithms. Originally developed to accelerate convergence in optimization tasks, OBL is leveraged here to generate synthetic opposite samples that enrich the training data and improve decision boundary formation. We explore three OBL variants (Global OBL, Class-Wise OBL, and Localized Class-Wise OBL) and integrate them with K-Nearest Neighbors (KNN). Extensive experiments conducted on 26 heterogeneous and high-dimensional datasets demonstrate that OBL-enhanced classifiers consistently outperform the basic KNN. These findings underscore the potential of OBL as a lightweight yet powerful data transformation strategy for enhancing classification performance, especially in complex or sparse learning environments.
- North America > United States > New York (0.04)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- Africa > Middle East > Algeria > Constantine Province > Constantine (0.04)
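The Global OBL transform reflects each training sample across its per-feature bounds; the augmented training set is then the originals plus their opposites. This is a minimal sketch of the standard Global OBL formula (x' = lo + hi - x); the class-wise variants would compute the bounds per class instead, and the paper's exact augmentation protocol may differ.

```python
def global_opposites(X):
    """Global OBL: reflect each sample across the per-feature
    interval [min_j, max_j] computed over the whole training set."""
    n_features = len(X[0])
    lo = [min(row[j] for row in X) for j in range(n_features)]
    hi = [max(row[j] for row in X) for j in range(n_features)]
    return [[lo[j] + hi[j] - row[j] for j in range(n_features)]
            for row in X]

X = [[0.0, 1.0], [2.0, 3.0]]
X_augmented = X + global_opposites(X)  # train KNN on this enriched set
```

Opposite samples carry the same labels as their originals in the class-wise setting; how labels are assigned under Global OBL is a design choice the paper evaluates.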
An Evolutionary Large Language Model for Hallucination Mitigation
Boulesnane, Abdennour, Souilah, Abdelhakim
The emergence of LLMs such as ChatGPT and Gemini has marked the modern era of artificial intelligence, characterized by high-impact applications generating text, images, and videos. However, these models suffer from one critical challenge called hallucination: the confident presentation of inaccurate or fabricated information. This problem raises serious concern when these models are applied to specialized domains, including healthcare and law, where accuracy and precision are absolute requirements. In this paper, we propose EvoLLMs, an innovative framework inspired by Evolutionary Computation, which automates the generation of high-quality question-answering (QA) datasets while minimizing hallucinations. EvoLLMs employs genetic algorithms, mimicking evolutionary processes like selection, variation, and mutation, to guide LLMs in generating accurate, contextually relevant question-answer pairs. Comparative analysis shows that EvoLLMs consistently outperforms human-generated datasets in key metrics such as Depth, Relevance, and Coverage, while nearly matching human performance in mitigating hallucinations. These results highlight EvoLLMs as a robust and efficient solution for QA dataset generation, significantly reducing the time and resources required for manual curation.
- Asia > Singapore (0.04)
- Africa > Middle East > Algeria > Constantine Province > Constantine (0.04)
- Overview (0.93)
- Research Report > Promising Solution (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Evolutionary Systems (1.00)
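The evolutionary loop EvoLLMs describes (selection, variation, mutation) can be illustrated with a generic genetic algorithm skeleton. The bitstring genome and OneMax fitness below are toy stand-ins, assumed for the example; in the framework itself the candidates would be QA pairs and the fitness an LLM-based quality score.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60, seed=0):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, and per-bit mutation over bitstring genomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):              # bit-flip mutation
                if rng.random() < 1.0 / length:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)  # OneMax: fitness = number of 1s in the genome
```

Swapping in a crossover that splices two candidate questions, and a mutation that paraphrases one, turns the same loop into a QA-dataset generator.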
Uterine Ultrasound Image Captioning Using Deep Learning Techniques
Boulesnane, Abdennour, Mokhtari, Boutheina, Segueni, Oumnia Rana, Segueni, Slimane
Medical imaging has significantly revolutionized medical diagnostics and treatment planning, progressing from early X-ray usage to sophisticated methods like MRIs, CT scans, and ultrasounds. This paper investigates the use of deep learning for medical image captioning, with a particular focus on uterine ultrasound images. These images are vital in obstetrics and gynecology for diagnosing and monitoring various conditions across different age groups. However, their interpretation is often challenging due to their complexity and variability. To address this, a deep learning-based medical image captioning system was developed, integrating Convolutional Neural Networks with a Bidirectional Gated Recurrent Unit network. This hybrid model processes both image and text features to generate descriptive captions for uterine ultrasound images. Our experimental results demonstrate the effectiveness of this approach over baseline methods, with the proposed model achieving superior performance in generating accurate and informative captions, as indicated by higher BLEU and ROUGE scores. By enhancing the interpretation of uterine ultrasound images, our research aims to assist medical professionals in making timely and accurate diagnoses, ultimately contributing to improved patient care.
- Africa > Middle East > Algeria > Constantine Province > Constantine (0.05)
- North America > United States > New York (0.04)
- Africa > Middle East > Algeria > Mila Province > Mila (0.04)
- Africa > Middle East > Algeria > Ghardaïa Province > Ghardaïa (0.04)
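The caption-generation step of such a CNN + BiGRU model can be illustrated with a greedy decoding loop: at each step the model's conditional next-word distribution is queried and the most probable word is appended until an end token appears. The toy distribution below is an assumed stand-in for the trained network's output, not from the paper.

```python
def greedy_caption(next_word_probs, start="<s>", end="</s>", max_len=10):
    """Greedy decoding: repeatedly pick the most probable next word
    until the end token (or the length limit) is reached."""
    caption = [start]
    while len(caption) < max_len:
        probs = next_word_probs(caption)  # model's conditional distribution
        word = max(probs, key=probs.get)
        if word == end:
            break
        caption.append(word)
    return caption[1:]

# Toy stand-in for the trained CNN + BiGRU conditional distribution:
toy = {
    ("<s>",): {"normal": 0.8, "uterus": 0.2},
    ("<s>", "normal"): {"uterus": 0.9, "</s>": 0.1},
    ("<s>", "normal", "uterus"): {"</s>": 1.0},
}
caption = greedy_caption(lambda c: toy[tuple(c)])
```

Beam search is the usual refinement of this loop when single-step greedy choices produce poor captions.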
ARMAS: Active Reconstruction of Missing Audio Segments
Cheddad, Zohra, Cheddad, Abbas
Digital audio signal reconstruction of a lost or corrupt segment using deep learning algorithms has been explored intensively in recent years. Nevertheless, traditional methods based on linear interpolation, phase coding, and tone insertion are still in vogue. However, we found no research work on reconstructing audio signals with the fusion of dithering, steganography, and machine learning regressors. Therefore, this paper proposes the combination of steganography, halftoning (dithering), and state-of-the-art shallow (Random Forest regression, RF) and deep learning (Long Short-Term Memory, LSTM) methods. The results (including comparisons with SPAIN, autoregressive, deep learning-based, graph-based, and other methods) are evaluated with three different metrics. The results show that the proposed solution is effective and can enhance the reconstruction of audio signals using the side information (e.g., latent representations for audio inpainting) that steganography provides. Moreover, this paper proposes a novel framework for reconstruction from heavily compressed embedded audio data using halftoning (i.e., dithering) and machine learning, which we term HCR (halftone-based compression and reconstruction). This work may trigger interest in optimising this approach and/or transferring it to different domains (e.g., image reconstruction). Compared to existing methods, we show improvement in inpainting performance in terms of the signal-to-noise ratio (SNR), the objective difference grade (ODG), and Hansen's audio quality metric.
- Europe > Spain (0.25)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- (4 more...)
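Of the traditional baselines the abstract mentions, linear interpolation over the lost segment is simple to sketch, together with the SNR metric used in the evaluation. The function names and the gap convention (indices `start` to `end`, exclusive of `end`) are illustrative assumptions.

```python
import math

def interpolate_gap(signal, start, end):
    """Fill samples signal[start:end] by linear interpolation between
    the last good sample before the gap and the first one after it."""
    left, right = signal[start - 1], signal[end]
    n = end - start + 1
    out = list(signal)
    for i in range(start, end):
        t = (i - start + 1) / n
        out[i] = left + t * (right - left)
    return out

def snr_db(ref, est):
    """Signal-to-noise ratio in dB between reference and estimate."""
    num = sum(r * r for r in ref)
    den = sum((r - e) ** 2 for r, e in zip(ref, est))
    return 10 * math.log10(num / den) if den else float("inf")
```

The proposed ARMAS/HCR pipeline replaces the interpolation step with regressors (RF, LSTM) conditioned on steganographically embedded side information, then compares reconstructions with metrics like `snr_db`.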
The Archerfish Hunting Optimizer: a novel metaheuristic algorithm for global optimization
Zitouni, Farouq, Harous, Saad, Belkeram, Abdelghani, Hammou, Lokman Elhakim Baba
Global optimization solves real-world problems numerically or analytically by minimizing their objective functions. Most analytical algorithms are greedy and computationally intractable. Metaheuristics are nature-inspired optimization algorithms. They numerically find a near-optimal solution for optimization problems in a reasonable amount of time. We propose a novel metaheuristic algorithm for global optimization. It is based on the shooting and jumping behaviors of the archerfish when hunting aerial insects. We name it the Archerfish Hunting Optimizer (AHO). We perform two sets of comparisons to validate the proposed algorithm's performance. First, AHO is compared to 12 recent metaheuristic algorithms (those accepted for the 2020 competition on single-objective bound-constrained numerical optimization) on ten test functions from the CEC 2020 benchmark for unconstrained optimization. Second, the performance of AHO and 3 recent metaheuristic algorithms is evaluated using five engineering design problems taken from the CEC 2020 benchmark for non-convex constrained optimization. The experimental results are evaluated using the Wilcoxon signed-rank and the Friedman tests. The statistical indicators illustrate that the Archerfish Hunting Optimizer has an excellent ability to accomplish higher performance in competition with well-established optimizers.
- Africa > Middle East > Algeria > Ouargla Province > Ouargla (0.04)
- Asia > China > Henan Province > Zhengzhou (0.04)
- North America > United States > Nevada > Clark County > Las Vegas (0.04)
- (4 more...)
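The archerfish-inspired update rules are specific to the paper, but the generic population-based loop that metaheuristics like AHO share can be sketched. The shrinking Gaussian perturbation below is an illustrative stand-in for the shooting/jumping operators, not the AHO update itself.

```python
import random

def minimize(objective, dim=2, bounds=(-5.0, 5.0), pop_size=20,
             iterations=200, seed=1):
    """Generic metaheuristic loop: sample candidate solutions around
    the incumbent best, keep improvements, shrink the step over time."""
    rng = random.Random(seed)
    lo, hi = bounds
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    best_val = objective(best)
    for t in range(iterations):
        step = (hi - lo) * 0.5 * (1 - t / iterations)  # cooling schedule
        for _ in range(pop_size):
            cand = [min(hi, max(lo, x + rng.gauss(0, step))) for x in best]
            val = objective(cand)
            if val < best_val:
                best, best_val = cand, val
    return best, best_val

sphere = lambda x: sum(v * v for v in x)  # classic unconstrained test function
sol, val = minimize(sphere)
```

Benchmarks like CEC 2020 replace `sphere` with shifted, rotated, and composed functions, which is where operator design (here, the Gaussian stand-in) starts to matter.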
African scientists take on new ATLAS machine-learning challenge (ATLAS Experiment at CERN)
Cirta is a new machine-learning challenge for high-energy physics on Zindi, the Africa-based data-science challenge platform. Launched this autumn at the International Conference on High Energy and Astroparticle Physics (TIC-HEAP), Constantine, Algeria, Cirta challenges participants to provide machine-learning solutions for identifying particles in LHC experiment data. Cirta is the first particle-physics challenge to specifically target computer scientists in Africa, and puts the public TrackML challenge dataset to new use. Created by ATLAS computer scientists Sabrina Amrouche and Dalila Salamani, the Cirta challenge aims to bring new blood into the growing field of machine learning for particle physics. "Zindi has a strong community of computer scientists based on the continent, and we're looking forward to reviewing their creative solutions to the challenge," says Salamani.
Towards more accurate clustering method by using dynamic time warping
An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increases. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper a simple and accurate process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training. The proposed process consists of two steps. In the first step, training instances with similar inputs are clustered, and a weight factor representing the frequency of these instances is assigned to each representative cluster. The Dynamic Time Warping technique is used as a dissimilarity function to cluster similar examples. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while maintaining the same initial, transition, and emission probability matrices as those obtained with the classical HMM training algorithm. Accordingly, the classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2,200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs; it can be employed for a large variety of ML methods.
- North America > United States (0.04)
- North America > Canada (0.04)
- Europe > Middle East > Republic of Türkiye > Istanbul Province > Istanbul (0.04)
- (7 more...)
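The Dynamic Time Warping dissimilarity at the heart of the clustering step can be sketched directly. This is the standard O(nm) dynamic-programming formulation; the absolute-difference local cost is an assumption for the example, and the paper may use a different local cost.

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences,
    usable as the dissimilarity function when clustering instances."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Because DTW tolerates local time shifts, sequences like `[0, 0, 1]` and `[0, 1]` are zero distance apart even though their lengths differ, which is exactly why it groups similar training instances better than pointwise Euclidean distance.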